pytest get the file path of current test file - python

I can't seem to get the file path of the current file being tested in pytest.
For instance consider something like the following directory structure:
devops_scripts
├── devops_utilities
│   ├── backupMonthly.py
│   ├── generateAnsibleINI.py
│   └── tests
│       └── test_backup.txt
├── monitoring
│   ├── analyizeCatalinaLog.py
│   ├── fetchCatalinaLogs.py
│   └── getaccountcredit.py
├── pi_provisioning
│   ├── piImageMake.sh
│   ├── piImageRead.sh
│   └── piImageSquashSd.py
└── script_utilities
    ├── README.md
    ├── __init__.py
    ├── execute.py
    ├── path_handling.py
    ├── sql_handling.py
    └── tests
        ├── sourcedirfortests
        │   ├── file1.txt
        │   └── file2.txt
        ├── test_ansible.txt
        └── test_execute.txt
If I run pytest from the top level, it will descend and collect test_execute.txt. While test_execute.txt is being run, how do I get the file path to it? (I can get the rootdir through os.path.abspath('.'), but that is not what I need.)
(I need to be able to set these paths in order to test some execution things on file1.txt and file2.txt, and this has to work no matter how deeply nested the things I am testing are.) I am not interested in setting up and tearing down specific temp testing directories; I just need the path of the file being tested.
I have tried things like:
>>> print os.path.abspath(__file__)
But that just yields: UNEXPECTED EXCEPTION: NameError("name '__file__' is not defined",)
I am not above even accessing some internal functions / objects in py.test
(I should add that I need this to work in Python 2.7 and Python 3.x)

Pytest sets the PYTEST_CURRENT_TEST environment variable for the currently running test. Not only does it contain the current file, it also contains the full collected test id (test name, parameters, etc.).
import os

def test_current():
    print(os.getenv('PYTEST_CURRENT_TEST'))
You must pass the -s flag to pytest if you want to see the printed text.
From the reference documentation:
During the test session pytest will set PYTEST_CURRENT_TEST to the current test nodeid and the current stage, which can be setup, call and teardown.
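For the original use case (building paths to files that sit next to the running test file), the file portion can be split off the nodeid. A minimal sketch; the helper name and the example nodeid are mine, and the string handling works on both Python 2.7 and 3.x:

```python
import os

def file_from_nodeid(nodeid):
    """Return the test-file path (relative to rootdir) from a
    PYTEST_CURRENT_TEST value such as
    'script_utilities/tests/test_execute.py::test_run (call)'."""
    return nodeid.split("::", 1)[0]

# Inside a running test you could then do:
#   rel = file_from_nodeid(os.environ["PYTEST_CURRENT_TEST"])
#   here = os.path.dirname(os.path.abspath(rel))
#   source_dir = os.path.join(here, "sourcedirfortests")
```

(Note that os.path.abspath(__file__) also works inside a test module; the NameError in the question came from trying it at an interactive prompt, where __file__ is not defined.)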

Related

Python packaging - Proper usage of the tests folder with unittest

I'm experimenting with packaging some Python projects and have followed this guide. The anonymized file tree can be seen below. The toml file is a barebones one from the tutorial, modified appropriately. Building and uploading work well. So far so good.
.
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── mymodule
│       ├── __init__.py
│       └── main.py
└── tests
My next intended step is to package an older smaller well behaving project which includes a test suite written with unittest. Simplified structure below.
.
├── mymodule
│   ├── submoduleA
│   │   ├── __init__.py
│   │   └── foo.py
│   ├── submoduleB
│   │   ├── __init__.py
│   │   └── bar.py
│   ├── baz.py
│   └── __init__.py
└── tests
    ├── test_submoduleA.py
    └── test_submoduleB.py
This is where my progress grinds to a halt.
There are many different ways to skin a cat, but none of them directly involves unittest as far as I can tell. For now I have opted to use tox to invoke unittest.
Similarly when I have a look at different Python project repos the structure under tests seem to differ a bit.
End intent/wish: Convert said older project to a packagable one, editing the tests as little as possible, and using the tests for testing while developing and to do basic tests on the target device later.
Questions:
What is the purpose of the tests folder? Eg to run tests while developing files in src, to test the built package and/or to verify a package works once installed?
Is it possible to use the pyproject.toml file with unittest?

Importing a sub-module from the parent package

I have the following package structure:
.
├── README.md
├── common
│   ├── __init__.py
│   ├── analysis
│   │   ├── __init__.py
│   │   └── base_analysis.py
│   └── logger
│       ├── __init__.py
│       └── logger.py
└── scripts
    └── test_analysis
        └── run.py
I would like to access logger in base_analysis.py. If I do this:
from ..logger import Logger
I am getting this error:
ValueError: attempted relative import beyond top-level package
How do I import a sub-package from the parent package?
Note: I am running the script from scripts/test_analysis using:
python run.py
The following change in the calling run.py script fixed it:
from logger.logger import Logger
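If the layout must stay as shown, a more robust alternative is to put the repository root on sys.path in run.py and use an absolute import; a sketch, where the helper name is mine:

```python
import os
import sys

def add_repo_root(script_path, levels_up=2):
    """Prepend the repository root (levels_up directories above the
    given script) to sys.path, so that absolute imports such as
    'from common.logger.logger import Logger' can resolve."""
    root = os.path.abspath(
        os.path.join(os.path.dirname(os.path.abspath(script_path)),
                     *[".."] * levels_up))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# In scripts/test_analysis/run.py:
#   add_repo_root(__file__)          # repo root is two levels up
#   from common.logger.logger import Logger
```

This works no matter which directory you launch `python run.py` from, because the path is computed from the script's own location rather than the current working directory.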

Versioning multiple projects with versioneer within a single git repository

I have a single, large git repo with many different projects (not submodules). A few of these projects are Python projects for which I'd like to track versioning with Python's versioneer; others may be completely independent projects (say, in Haskell). I.e., the directory structure looks like this:
myrepository
├── .git
├── README.md
├── project1/
│   ├── project1/
│   │   ├── __init__.py
│   │   └── _version.py
│   ├── versioneer.py
│   ├── setup.cfg
│   └── setup.py
├── project2/
├── project3/
└── project4/
    ├── project4/
    │   ├── __init__.py
    │   └── _version.py
    ├── versioneer.py
    ├── setup.cfg
    └── setup.py
This doesn't play well with versioneer because it can't discover the .git directory at the project root level, so I get a version of 0+unknown.
Questions:
Is there a suggested way to use versioneer with a single monolithic repo with multiple projects?
Depending on the answer to the above, is it recommended that my git tags read like: project1-2.1.0, project2-1.3.1, or should I unify the git tags like: 1.2.1, 1.2.2?
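On the tagging question: versioneer's per-project setup.cfg takes a tag_prefix, which is the usual way to keep tags distinguishable in a single repo. A sketch for project1 (values are illustrative, and whether versioneer finds the top-level .git from a subdirectory depends on the versioneer version):

```ini
[versioneer]
VCS = git
style = pep440
versionfile_source = project1/_version.py
versionfile_build = project1/_version.py
tag_prefix = project1-
parentdir_prefix = project1-
```

With per-project prefixes, tags like project1-2.1.0 and project2-1.3.1 can coexist without colliding, so unified tags are not required.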

Module-level py.test fixtures that don't run on every submodule

Say I have a directory structure like this
$ tree
.
├── A
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_set_1.py
│   └── test_set_2.py
├── B
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_set_1.py
│   └── test_set_2.py
├── C
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_set_1.py
│   └── test_set_2.py
├── conftest.py
└── pytest.ini
I want to define a pytest fixture that I can specify to run once for a particular top-level module (A, B, or C), but not once for every submodule (test_set_1.py, test_set_2.py, etc.)
How can I do this? Can I inject these dependencies inside the __init__.py files somehow?
After a few hours I figured out a working answer: a fixture with scope="session" spans all of a package's submodules, but a conftest.py is only visible to the tests below it. So if A/conftest.py defines a session-scoped fixture, it runs only once for all the tests under A, and it never runs for tests in other top-level modules.
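A sketch of that arrangement (the fixture name and body are mine):

```python
# A/conftest.py
import pytest

@pytest.fixture(scope="session", autouse=True)
def module_a_setup():
    """Runs at most once per pytest session, and only if at least one
    test under A/ is collected; tests under B/ and C/ never trigger it."""
    resource = {"ready": True}   # stand-in for expensive setup
    yield resource
    resource.clear()             # teardown after the last test under A/
```

Newer pytest versions also offer scope="package", which ties the fixture to the enclosing package rather than the whole session.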

Pip install ignores files in MANIFEST.in - how to structure the project correctly?

I read a lot about this problem but could not find any solution, so I'll ask yet another question about it, since I'm not even sure whether I'm using the correct folder structure for my Python package.
So basically I'm developing an application which uses the Tornado web-server framework and I want to package it, so the users can install it via pip and get access to a basic script to start the web server.
The directory structure is the following:
.
├── MANIFEST.in
├── README.md
├── config
│   └── default.cfg
├── docs
│   ├── Makefile
│   ├── _build
│   ├── _static
│   ├── _templates
│   ├── conf.py
│   └── index.rst
├── foopackage
│   ├── __init__.py
│   ├── barmodule.py
│   └── bazmodule.py
├── setup.py
├── static
│   ├── css
│   │   ├── menu.css
│   │   └── main.css
│   ├── img
│   │   └── logo.png
│   ├── js
│   │   ├── ui.js
│   │   └── navigation.js
│   └── lib
│       ├── d3.v3.min.js
│       └── jquery-1.11.0.min.js
└── templates
    ├── index.html
    └── whatever.html
As you can see, the Python code lives in the package foopackage.
The MANIFEST.in file recursively includes the directories config, static, templates, and docs.
This is my setup.py (only the relevant parts):
from setuptools import setup

setup(name='foo',
      version='0.1.0',
      packages=['foopackage'],
      include_package_data=True,
      install_requires=[
          'tornado>=3.2.2',
      ],
      entry_points={
          'console_scripts': [
              'foo=foopackage.barmodule:main',
          ],
      },
      )
If I run python setup.py sdist, everything gets packaged nicely; the docs, templates, config files, etc. are included. However, if I run pip install ..., only foopackage is installed and everything else is ignored.
How do I include those additional files in the install procedure? Is my directory structure OK? I also read about "faking a package", i.e. putting everything in a directory and touching an __init__.py file, but that seems pretty odd to me :-\
I solved the problem by moving the static directory to the actual Python module directory (foopackage). It seems that top level "non-package" folders are ignored otherwise.
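The underlying rule: MANIFEST.in only controls what goes into the sdist, while at install time setuptools copies only files that live inside a package directory. After moving the data under foopackage/, the relevant setup.py parts look like this (the glob patterns are illustrative):

```python
from setuptools import setup

setup(
    name='foo',
    version='0.1.0',
    packages=['foopackage'],
    include_package_data=True,      # install package files matched by MANIFEST.in
    package_data={                  # or list the patterns explicitly
        'foopackage': ['static/*/*', 'templates/*.html'],
    },
)
```

Either include_package_data with MANIFEST.in entries or explicit package_data patterns will do; both only apply to files under the package directory.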