Way to add pytest options outside top level conftest?

I have a project that defines testing support modules, including py.test plugins in a package subdirectory like this:
bokeh/_testing
├── __init__.py
├── plugins
│   ├── __init__.py
│   ├── bokeh_server.py
│   ├── examples_report.jinja
│   ├── examples_report.py
│   ├── file_server.py
│   ├── implicit_mark.py
│   ├── integration_tests.py
│   ├── jupyter_notebook.py
│   ├── log_file.py
│   └── pandas.py
└── util
    ├── __init__.py
    ├── api.py
    ├── examples.py
    ├── filesystem.py
    ├── git.py
    └── travis.py
Several of the plugins need to define new options with parser.addoption. I would like these calls to be made inside the respective plugin modules. However, if I do that, and include those plugins in a test file with, e.g.
# test_examples.py
pytest_plugins = (
    "bokeh._testing.plugins.bokeh_server",
    "bokeh._testing.plugins.examples_report",
)

# pytest.mark.examples test code here
Then pytest complains that any custom command line options are undefined:
(base) ❯ py.test -s -v -m examples --diff-ref FETCH_HEAD --report-path=examples.html
usage: py.test [options] [file_or_dir] [file_or_dir] [...]
py.test: error: unrecognized arguments: --diff-ref --report-path=examples.html
inifile: /Users/bryanv/work/bokeh/setup.cfg
rootdir: /Users/bryanv/work/bokeh
The only way I have found around this is to collect ALL custom options in a single pytest_addoption in the top-level conftest.py:
# conftest.py
pytest_plugins = (
    "bokeh._testing.plugins.implicit_mark",
    "bokeh._testing.plugins.pandas",
)

# Unfortunately these seem to all need to be centrally defined at the top level
def pytest_addoption(parser):
    parser.addoption(
        "--upload", dest="upload", action="store_true", default=False, help="..."
    )
    # ... ALL other addoption calls for all plugins here ...
As I said, this works, but it is very inflexible in terms of code organization. It would be preferable for the options for the examples.py plugin to live in the examples.py module, with the code they relate to.
Another possibility would presumably be to load all the plugins in the top-level conftest.py, but some of the plugins are very heavyweight (e.g. they depend on Selenium), and I would not like to require all of that to be installed just to run basic tests.
Is there another way to accomplish this, and if so, how?

As noted above, as of Pytest 3.6.2, options must be added "only in plugins or conftest.py files situated at the tests root directory due to how pytest discovers plugins during startup."
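Given that constraint, one workable pattern (a sketch of my own, not part of the quoted answer): list every plugin module in the root conftest.py's pytest_plugins, and let each module define its own pytest_addoption hook. Modules registered this way count as plugins discovered at startup, so their options are recognized, while heavyweight imports such as Selenium can be deferred into fixture bodies so that merely registering the options stays cheap. The module paths follow the question's layout; the report_path option and fixture are hypothetical:
# bokeh/_testing/plugins/examples_report.py (sketch)
import pytest

def pytest_addoption(parser):
    # The option lives next to the plugin code it configures.
    parser.addoption(
        "--report-path", dest="report_path", default="examples.html",
        help="where to write the examples report",
    )

@pytest.fixture
def report_path(request):
    # Heavyweight dependencies (e.g. selenium) would be imported here,
    # inside the fixture, so importing this module stays cheap.
    return request.config.getoption("report_path")

# conftest.py (at the tests root directory)
pytest_plugins = (
    "bokeh._testing.plugins.implicit_mark",
    "bokeh._testing.plugins.examples_report",
)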

Related

Test packaging in python projects

Project structure:
project
├── some_api
│   ├── __init__.py
│   ├── api1.py
│   └── api2.py
├── some-folder
│   └── some-helper-module.py
├── lib
│   └── some-libs
├── docs
│   └── some-docs
├── Dockerfile
└── README.md
Where should the various tests go?
unit tests
functional tests for the API
performance tests using a tool like Locust
Possible Solution
In parallel with project, I can have something like:
test
├── unit_tests
│   ├── test1.py
│   └── test2.py
├── functional_tests
│   ├── f_test1.py
│   └── f_test2.py
├── perf_tests
│   └── locust-files
│       ├── load_test1.py
│       └── load_test2.py
└── test-data
    └── something.csv
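To illustrate the locust-files entries (a sketch; the endpoint, host, and the locust package itself are assumptions):
# perf_tests/locust-files/load_test1.py (sketch)
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Simulated think time between consecutive requests of one user.
    wait_time = between(1, 3)

    @task
    def call_api1(self):
        # Hypothetical endpoint exposed by some_api.api1
        self.client.get("/api1")
It would be run with something like locust -f perf_tests/locust-files/load_test1.py --host http://localhost:8000.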
Generally, this structure is followed; hope it helps. All types of tests should live inside the test module, each in a separate submodule.
app_name
├── app_name
│   ├── __init__.py
│   ├── folder_name
│   └── etc...
├── tests
│   ├── unit
│   └── integration
├── README.md
├── setup.py
└── requirements.txt
In our team we used to put the unit tests alongside the Python files they refer to, and the integration and performance tests outside the project, since they treat it almost as a black box:
project
├── some_api
│   ├── __init__.py
│   ├── api1.py
│   ├── api1_unit_testing.py
│   ├── api2.py
│   └── api2_unit_testing.py
├── some-folder
│   └── some-helper-module.py
├── lib
│   └── some-libs
├── docs
│   └── some-docs
├── tests
│   ├── profiling_performance.py
│   └── integration_testing.py
├── Dockerfile
└── README.md

Importing source files into test file

I have the following directory structure:
Project
├── README.md
├── app
│   ├── __init__.py
│   └── main.py
└── tests
    ├── __init__.py
    └── test_one.py
I want to import app.main in test_one. I went through similar stackoverflow questions and tried to add the path to the app folder in the test_one.py file as follows:
sys.path.append('/path to project/app')
However, I am getting the following error:
ModuleNotFoundError: No module named 'app'
How can I import the files from app into the test_one.py file? Is there a simple from ... import statement to achieve this?
You did not mention what unit testing framework you are using, but if you are using pytest, it already supports that kind of app/test organization where the tests are outside the application code.
setup.py
mypkg/
    __init__.py
    app.py
    view.py
tests/
    test_app.py
    test_view.py
    ...
If you don't have a setup.py file and are relying on the fact that Python by default puts the current directory in sys.path to import your package, you can execute python -m pytest to execute the tests against the local copy directly, without using pip.
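Another option you will often see suggested (my addition, not from the quoted docs): place an empty conftest.py at the project root. Under pytest's default import mode, the directory containing that conftest.py is prepended to sys.path, so from app import main resolves without a setup.py:
Project
├── conftest.py      # empty; its directory gets prepended to sys.path
├── app
│   ├── __init__.py
│   └── main.py
└── tests
    ├── __init__.py
    └── test_one.py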
Your code is already structured correctly, so you don't need to manually set sys.path. You can use from app import main normally (see my sample test_one.py below). The only thing I would add is a pytest.ini to specify the test paths and the pattern for the test files.
directory structure:
Project
├── pytest.ini
├── app
│   ├── __init__.py
│   └── main.py
└── tests
    ├── __init__.py
    └── test_one.py
pytest.ini:
[pytest]
addopts = -v
console_output_style = count
python_files = test_*.py
testpaths = tests
sample main.py:
def calculate(x, y):
    return x + y
sample test_one.py:
from app import main

def test_calculate():
    assert main.calculate(1, 2) == 3
running pytest:
$ pytest tests
=============================================================================== test session starts ===============================================================================
platform linux -- Python 3.7.2, pytest-4.6.2, py-1.8.0, pluggy-0.12.0 -- /home/gino/.virtualenvs/test-py37/bin/python3.7
cachedir: .pytest_cache
rootdir: /home/gino/Project, inifile: pytest.ini, testpaths: tests
collected 1 item
tests/test_one.py::test_calculate PASSED [1/1]
============================================================================ 1 passed in 0.01 seconds =============================================================================

No module error in Python 3.6 with Click library

I'm trying to build a CLI in Python using the click package. The Python version I'm using is 3.6.
This is the main module of my application:
import os
import click

cmd_folder = os.path.join(os.path.dirname(__file__), 'commands')

class IAMCLI(click.MultiCommand):
    def list_commands(self, ctx):
        rv = []
        for filename in os.listdir(cmd_folder):
            if filename.endswith('.py') and \
                    filename.startswith('cmd_'):
                rv.append(filename[4:-3])
        rv.sort()
        return rv

    def get_command(self, ctx, cmd_name):
        ns = {}
        fn = os.path.join(cmd_folder, 'cmd_{}.py'.format(cmd_name))
        with open(fn) as f:
            code = compile(f.read(), fn, 'exec')
            eval(code, ns, ns)
        return ns['cli']

@click.command(cls=IAMCLI)
@click.option('--env', default='dev', type=click.Choice(['dev', 'staging', 'production']),
              help='AWS Environment')
@click.pass_context
def cli(ctx, env):
    """AWS IAM roles and policies management CLI."""
    pass

if __name__ == '__main__':
    cli()
and this is the tree:
├── cli
│   ├── __init__.py
│   ├── aws
│   │   ├── __init__.py
│   │   ├── policy.py
│   │   └── role.py
│   ├── cli.py
│   └── commands
│       ├── __init__.py
│       └── cmd_dump.py
the cmd_dump.py looks like this:
import click
from cli.aws.role import fetch_roles

@click.command('dump', short_help='Dump IAM resources')
@click.pass_context
def cli(ctx):
    pass
the problem is that when I try to run python cli/cli.py --help this is what I get:
File "cli/commands/cmd_dump.py", line 3, in <module>
from cli.aws.role import fetch_roles
ModuleNotFoundError: No module named 'cli.aws'; 'cli' is not a package
Any idea about that?
I will try to give another answer based on my approach when starting development of a new python project. Do you plan to distribute your project, or maybe just share it with someone? If you do, what do you think - will this someone be happy with needing to remember the command
$ python path/to/project/codebase/cli/cli.py --help
to use your tool? Wouldn't it be easier for him to remember the command
$ cli --help
instead?
I suggest you start with packaging your project right away: write a minimal setup script:
from setuptools import setup, find_packages

setup(
    name='mypkg',
    version='0.1',
    packages=find_packages(),
    install_requires=['click'],
    entry_points={
        'console_scripts': ['cli=cli.cli:cli'],
    },
)
You can always enhance your setup script when new requirements emerge. Place the setup script in the root directory of your codebase:
├── setup.py
├── cli
│   ├── __init__.py
│   ├── aws
...
Now run python setup.py develop or, even better, pip install --editable . from the codebase root directory (where the setup.py script is). You have installed your project in development mode and can now invoke
$ cli --help
with all the imports being resolved correctly (which would resolve your issue). But you gain much more besides that: a way to package your project for distribution to your target users, and a clean command line interface which your users will invoke the same way as you just did.
Now continue with the project development. If you change the code of the cli command, it will be applied on the fly so you don't need to reinstall the project each time you change anything.
Once you are ready with the project development and want to deliver it to your users, issue:
$ python setup.py bdist_wheel
This will package your project into an installable wheel file (you will need the wheel package to be able to invoke the command: pip install wheel --user). Usually it will reside in the dist subdirectory of the codebase root dir. Give this file to the user. To install it, he will issue
$ pip install Downloads/mypkg-0.1-py3-none-any.whl --user
and can start to use your tool right away:
$ cli --help
This is a very much simplified description and there is a lot of stuff to learn, but there is also a ton of helpful materials to guide you through the process.
If you want to learn more about the topic: as a quickstart reference, I would recommend the excellent PyPA packaging guide. For packaging click commands, their own docs are more than sufficient.
I would encourage you to use pip for distribution and packaging development where applicable as it is a standard tool for that.
Do not run scripts inside packages! Packages are made to be imported in code, not to have scripts run inside them. The rest is unrelated to import errors.
For example:
├── cli                  # package
│   ├── __init__.py
│   ├── aws
│   │   ├── __init__.py
│   │   ├── policy.py
│   │   └── role.py
│   ├── cli.py
│   └── commands
│       ├── __init__.py
│       └── cmd_dump.py
├── run_this_module.py
Module to execute, run_this_module.py:
import cli

# Do your code here
I have done this a hundred times. It is tempting to name a package the same thing at multiple levels, but resist! I tend to have __main__.py in my packages these days, which helps solve the problem that you are having.
I suspect you are having a namespace issue. Try renaming your package or the internal file called cli.py. Be sure to refactor any imports that are attempting to use these.
Also, change your function name to something besides cli. Same issue.
├── cli            <- rename this to 'my_app' or something else
│   ├── __init__.py
│   ├── aws
│   │   ├── __init__.py
│   │   ├── policy.py
│   │   └── role.py
│   ├── cli.py     <- or maybe it is easier to rename this
│   └── commands
│       ├── __init__.py
│       └── cmd_dump.py
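To make the __main__.py remark concrete (a sketch, assuming the package is renamed to my_app as suggested): with a __main__.py you run the package via python -m my_app instead of executing a file inside it, so the package is imported as a package and absolute imports like my_app.aws.role resolve.
# my_app/__main__.py (hypothetical, after the rename)
from my_app.cli import cli

if __name__ == '__main__':
    cli()
Invoke it with python -m my_app --help from the directory that contains my_app.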

Python project structure for selenium test

I'm a bit rusty with Python (2.7) modules & packages, so I hope to find some help. I am writing some tests using Selenium. I would like to organize the tests by "scenario families", and I'd like to implement a helper class to handle some boilerplate that would be the base class for all the tests.
Basically I'd like to have a structure looking like:
.
├── assets
│   └── ressource.ext
├── tests
│   ├── __init__.py
│   ├── user
│   │   ├── __init__.py
│   │   └── upload.py
│   └── visitor
│       ├── __init__.py
│       ├── homepage.py
│       ├── login.py
│       ├── search.py
│       └── signup.py
└── utils
    ├── __init__.py
    └── base.py
I am completely stuck on how to make the utils.base module visible to, e.g., the tests.visitor.signup module while still allowing this specific test to be run with python tests/visitor/signup.py.
How would I do that?
Thanks!
Edit: to make things easier here is a dummy sample of what I'm trying to do:
mkdir -p {utils,tests/user}
touch {utils,tests{,/user}}/__init__.py
echo -e "import unittest\n\nclass Base(unittest.TestCase):\n    pass" > utils/base.py
echo -e "import unittest\nfrom utils.base import Base\n\nclass MyTest(Base):\n    pass\n\nif __name__ == '__main__':\n    unittest.main()" > tests/user/upload.py
This produces the following tree with empty __init__.py files and the import I'm trying to achieve in tests/user/upload.py:
dummy/
├── tests
│   ├── __init__.py
│   └── user
│       ├── __init__.py
│       └── upload.py
└── utils
    ├── __init__.py
    └── base.py
Now if I am in the dummy/ folder I am getting this:
(env)dummy $ python tests/user/upload.py
Traceback (most recent call last):
File "tests/user/upload.py", line 1, in <module>
from utils.base import Base
ImportError: No module named utils.base
But if I run it interactively there is obviously no problem:
(env) dummy $ python
Python 2.7.6 (default, Jan 16 2014, 16:39:48)
>>> from utils.base import Base
>>>
My problem must be really silly, but I can't see what I'm doing wrong. What I also don't get is that specifying the python path ((env) dummy $ PYTHON_PATH=. python tests/user/upload.py) doesn't fix the issue.
You can set PYTHONPATH to search the current directory (note the spelling: PYTHON_PATH, which you tried, is not a variable Python reads):
PYTHONPATH=. python tests/user/upload.py
Or you can use a test runner, like nose:
nosetests tests/user/upload.py
Or you can create your own test runner if you don't want to use nose. For instance a runtests.py file with:
import unittest
from tests.user.upload import *
unittest.main()
Then:
python runtests.py
This test runner could import more tests or could be selective about what tests it imports. Ultimately, I'd recommend using nose over writing your own test runner.
Starting a test runner at the top of your hierarchy works because Python adds the directory containing the executed script to the list of paths it searches for modules.
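As for the shared-boilerplate base class the question describes, a minimal sketch of utils/base.py might look like this (the Firefox driver and the base URL are my assumptions):
# utils/base.py (sketch)
import unittest

from selenium import webdriver

class Base(unittest.TestCase):
    # Shared boilerplate for all scenario families.
    base_url = 'http://localhost:8000'  # assumed application under test

    def setUp(self):
        # A fresh browser per test keeps scenarios independent.
        self.driver = webdriver.Firefox()

    def tearDown(self):
        self.driver.quit()
Each test module (e.g. tests/visitor/signup.py) would then subclass Base and be launched through one of the runners above so that the project root is on sys.path; with the standard library alone, python -m unittest discover -s tests -t . -p '*.py' from the project root achieves the same path setup.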

Automatically Generating Documentation for All Python Package Contents

I'm trying to auto-generate basic documentation for my codebase using Sphinx. However, I'm having difficulty instructing Sphinx to recursively scan my files.
I have a Python codebase with a folder structure like:
<workspace>
└── src
    └── mypackage
        ├── __init__.py
        │
        ├── subpackageA
        │   ├── __init__.py
        │   ├── submoduleA1
        │   └── submoduleA2
        │
        └── subpackageB
            ├── __init__.py
            ├── submoduleB1
            └── submoduleB2
I ran sphinx-quickstart in <workspace>, so now my structure looks like:
<workspace>
├── src
│   └── mypackage
│       ├── __init__.py
│       │
│       ├── subpackageA
│       │   ├── __init__.py
│       │   ├── submoduleA1
│       │   └── submoduleA2
│       │
│       └── subpackageB
│           ├── __init__.py
│           ├── submoduleB1
│           └── submoduleB2
│
├── index.rst
├── _build
├── _static
└── _templates
I've read the quickstart tutorial, and although I'm still trying to understand the docs, the way it's worded makes me concerned that Sphinx assumes I'm going to manually create documentation files for every single module/class/function in my codebase.
However, I did notice the "automodule" statement, and I enabled autodoc during quickstart, so I'm hoping most of the documentation can be automatically generated. I modified my conf.py to add my src folder to sys.path and then modified my index.rst to use automodule. So now my index.rst looks like:
Contents:

.. toctree::
   :maxdepth: 2

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

.. automodule:: alphabuyer
   :members:
I have dozens of classes and functions defined in the subpackages. Yet, when I run:
sphinx-build -b html . ./_build
it reports:
updating environment: 1 added, 0 changed, 0 removed
And this appears to have failed to import anything inside my package. Viewing the generated index.html shows nothing next to "Contents:". The Index page only shows "mypackage (module)", but clicking it shows it also has no contents.
How do you direct Sphinx to recursively parse a package and automatically generate documentation for every class/method/function it encounters, without having to manually list every class yourself?
You can try using sphinx-apidoc.
$ sphinx-apidoc --help
Usage: sphinx-apidoc [options] -o <output_path> <module_path> [exclude_paths, ...]
Look recursively in <module_path> for Python modules and packages and create
one reST file with automodule directives per package in the <output_path>.
You can mix sphinx-apidoc with sphinx-quickstart in order to create the whole doc project like this:
$ sphinx-apidoc -F -o docs project
This call will generate a full project with sphinx-quickstart and look recursively in <module_path> (here, project) for Python modules.
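For the layout in the question, a typical sequence would be (a sketch; paths are assumptions based on the question's tree):
$ sphinx-apidoc -o . src/mypackage
$ sphinx-build -b html . ./_build
The first command writes one .rst file with automodule directives per package next to index.rst; the second builds the HTML as before.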
Perhaps apigen.py can help: https://github.com/nipy/nipy/tree/master/tools.
This tool is described very briefly here: http://comments.gmane.org/gmane.comp.python.sphinx.devel/2912.
Or better yet, use pdoc.
Update: the sphinx-apidoc utility was added in Sphinx version 1.1.
Note
For Sphinx (actually, the Python interpreter that executes Sphinx) to find your module, it must be importable. That means that the module or the package must be in one of the directories on sys.path – adapt your sys.path in the configuration file accordingly.
So, go to your conf.py and add
import an_example_pypi_project.useful_1
import an_example_pypi_project.useful_2
Now your index.rst looks like:
.. toctree::
   :glob:

   example
   an_example_pypi_project/*
and run:
make html
From Sphinx version 3.1 (June 2020), if you're happy to use sphinx.ext.autosummary to display summary tables, you can use the new :recursive: option to automatically detect every module in your package, however deeply nested, and automatically generate documentation for every attribute, class, function and exception in that module.
See my answer here: https://stackoverflow.com/a/62613202/12014259
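A minimal setup for that approach (a sketch; mypackage stands in for the question's package, which must be importable when Sphinx runs) enables the extension in conf.py:
# conf.py (excerpt)
extensions = ['sphinx.ext.autosummary']
autosummary_generate = True  # write stub pages during the build
and points an autosummary directive at the top-level package in index.rst:
.. autosummary::
   :toctree: _autosummary
   :recursive:

   mypackage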
