I’m looking for a solution for designing my program.
My program consists of 3 blocks:
Classes
Functions
Other utilities
I want to structure my program this way:
program_folder/
    main.py
    classes_folder/
        class_1.py
        class_2.py
    functions_folder/
        set_of_func_1.py
        set_of_func_2.py
    utilities_folder/
        set_of_utilities_1.py
        set_of_utilities_2.py
I want:
- any script in «classes_folder» to be able to import any script in «functions_folder»;
- any script in «functions_folder» to be able to import any script in «utilities_folder»;
- all scripts to be usable normally from main.py;
- all scripts in «classes_folder», «functions_folder» and «utilities_folder» to be testable when run as «main» (if __name__ == "__main__": some tests);
- «program_folder» to be placeable anywhere on my computer (there shouldn't be a dependency on the exact path to «program_folder»).
From all the above I thought I have to:
- change the import search path for all scripts in «classes_folder», «functions_folder» and «utilities_folder»;
- set the current working directory to «program_folder» for all scripts?
Is there a way I can do this?
Does my idea look good, or have I introduced some unexpected problems?
You can create a skeleton project like the following:
/path/to/project/
    setup.py
    my_project/
        __init__.py
        a/
            __init__.py
        b/
            __init__.py
==> ./my_project/__init__.py <==
print('my_project/__init__.py')
==> ./my_project/a/__init__.py <==
import my_project
print('my_project/a/__init__.py')
==> ./my_project/b/__init__.py <==
import my_project.a
print('my_project/b/__init__.py')
==> ./setup.py <==
from distutils.core import setup

setup(name='my_project',
      version='1.0',
      description='my_project',
      author='author',
      packages=['my_project'])
Then you can install the project locally using pip install -e /path/to/project/ (the project folder is not copied; it just gets registered. There's a dependency on the exact path, but this dependency is not hard-coded in the project files themselves).
As a result, import my_project, import my_project.a, etc. do what they say:
$ python my_project/b/__init__.py
my_project/__init__.py
my_project/a/__init__.py
my_project/b/__init__.py
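Mapping this back to your layout (a hedged sketch: the folder names become packages, each with an __init__.py, and setup.py lists the subpackages, e.g. via find_packages(); set_of_func_1.add is a hypothetical helper), a script in «classes_folder» can then use absolute imports and still run its own tests:

# program_folder/classes_folder/class_1.py
# assumes program_folder was installed with `pip install -e`
# and every folder contains an __init__.py
from program_folder.functions_folder import set_of_func_1

class Class1:
    def double(self, x):
        return set_of_func_1.add(x, x)  # hypothetical function in set_of_func_1

if __name__ == "__main__":
    # quick self-test; works from any directory once the package is installed
    assert Class1().double(2) == 4
    print("class_1 self-test passed")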
A common Python project structure could look like this:
project_name/
    setup.py
    requirements.txt
    project_name/
        __main__.py
        classes/
            __init__.py
            class1.py
            class2.py
        functions/
            __init__.py
            functions.py
        utils/
            __init__.py
            utils.py
Then, you could modify your imports from absolute to relative and run your package using something like:
$ /path/to/project_name> python -m project_name
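A hedged sketch of what those relative imports might look like inside the package (module names taken from the tree above):

# project_name/classes/class1.py
from ..functions import functions  # sibling subpackage, one level up
from ..utils import utils

# project_name/__main__.py
from .classes import class1

Relative imports like these only work when the code is run as a package (python -m project_name), not when a file inside it is executed directly as a script.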
Note that setup.py is only required if you want to install your package under one of your interpreters.
Note: see comments below also
I have a medium-sized Python command line program that runs well from my source code, and I've created a source distribution file and installed it into a virtual environment using "python setup.py install".
Since this is a pure Python program, and provided that the end users have installed Python and the required packages, my idea is that I can distribute it through PyPI for all available platforms as a source distribution.
Upon install, I get an 'appname' directory within the virtualenv's site-packages directory, and it also runs correctly when I write "python 'pathtovirtualenv'/Lib/site-packages/'myappname'".
But is this the way the end user is supposed to run distutils-distributed programs from the command line?
I find a lot of information on how to distribute a program using distutils, but not on how the end user is supposed to launch it after installing it.
Since you already created a setup.py, I would recommend looking at the entry_points:
entry_points={
    'console_scripts': [
        'scriptname=yourpackage.module:function',
    ],
},
Here, you have a package named yourpackage and a module named module in it, and you refer to the function function. This function will be wrapped by a script called scriptname, which will be installed into the user's bin folder, which is normally on the $PATH, so the user can simply type scriptname after installing your package via pip install.
To sum up: a user will install the package via pip install yourpackage and finally be able to call the function in module via scriptname.
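For completeness, a minimal setup.py sketch wiring this together (all names are placeholders):

from setuptools import setup, find_packages

setup(
    name='yourpackage',
    version='1.0',
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            'scriptname=yourpackage.module:function',
        ],
    },
)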
Here are some docs on this topic:
https://pythonhosted.org/setuptools/setuptools.html#automatic-script-creation
http://www.scotttorborg.com/python-packaging/command-line-scripts.html
Well, I eventually figured it out.
Initially, I wanted to just use distutils; I like it when the end user can install it with a minimum of extra dependencies. But I have now discovered that setuptools is the better option in my case.
My directory structure looks like this (Subversion):
trunk
|-- appname
| |-- __init__.py # an empty file
| |-- __main__.py # calls appname.main()
| |-- appname.py # contains a main() and imports moduleN
| |-- module1.py
| |-- module2.py
| |-- ...
|-- docs
| |-- README
| |-- LICENSE
| |-- ...
|-- setup.py
And my setup.py basically looks like this:
# This setup file is to be used with setuptools source distribution
# Run "python setup.py sdist" to deploy
from setuptools import setup, find_packages

setup(name="appname",
      ...
      include_package_data=True,
      packages=find_packages(),
      zip_safe=True,
      entry_points={
          'console_scripts': 'appname=appname.appname:main'
      })
The next step now is to figure out how to install the contents of the docs directory on the user's computer.
But right now, I'm thinking about adding --readme, --license, --changes, --sample (and so forth) options to the main script, to display them at run time.
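A rough sketch of that idea, assuming the doc files get shipped inside the package (e.g. in appname/docs/) so they can be located after installation; the file names and layout here are assumptions:

import argparse
import os

# assumes the docs are packaged under appname/docs/
DOCS_DIR = os.path.join(os.path.dirname(__file__), 'docs')

def main():
    parser = argparse.ArgumentParser(prog='appname')
    for name in ('readme', 'license', 'changes'):
        parser.add_argument('--' + name, action='store_true',
                            help='print the %s file and exit' % name.upper())
    args = parser.parse_args()
    for name in ('readme', 'license', 'changes'):
        if getattr(args, name):
            with open(os.path.join(DOCS_DIR, name.upper())) as doc:
                print(doc.read())
            return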
The source for the package is here
I'm installing the package from the index via:
easy_install hackertray
pip install hackertray
easy_install installs images/hacker-tray.png to the following folder:
/usr/local/lib/python2.7/dist-packages/hackertray-1.8-py2.7.egg/images/
While, pip installs it to:
/usr/local/images/
My setup.py is as follows:
from setuptools import setup

setup(name='hackertray',
      version='1.8',
      description='Hacker News app that sits in your System Tray',
      packages=['hackertray'],
      data_files=[('images', ['images/hacker-tray.png'])])
My MANIFEST file is:
include images/hacker-tray.png
Don't use data_files with relative paths. Actually, don't use data_files at all, unless you make sure the target paths are absolute ones properly generated in a cross-platform way instead of hard-coded values.
Use package_data instead:
setup(
    # (...)
    package_data={
        "hackertray.data": [
            "hacker-tray.png",
        ],
    },
)
where hackertray.data is a proper Python package (i.e. a directory that contains a file named __init__.py) and hacker-tray.png is right next to __init__.py.
Here's how it should look:
.
|-- hackertray
| |-- __init__.py
| `-- data
| |-- __init__.py
| `-- hacker-tray.png
`-- setup.py
You can get the full path to the image file using:
import os
from pkg_resources import resource_filename

print(os.path.abspath(resource_filename('hackertray.data', 'hacker-tray.png')))
I hope that helps.
PS: Python<2.7 seems to have a bug regarding packaging of the files listed in package_data. Always make sure to have a manifest file if you're using something older than Python 2.7 for packaging. See here for more info: https://groups.google.com/d/msg/python-virtualenv/v5KJ78LP9Mo/OiBqMcYVFYAJ
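With the package_data layout above, such a MANIFEST.in can be a single line (path per the tree above):

include hackertray/data/hacker-tray.png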
I have a small Python application that I would like to make into a downloadable/installable executable for UNIX-like systems. I am under the impression that setuptools would be the best way to make this happen, but somehow this doesn't seem to be a common task.
My directory structure looks like this:
myappname/
|-- setup.py
|-- myappname/
| |-- __init__.py
| |-- myappname.py
| |-- src/
| |-- __init__.py
| |-- mainclassfile.py
| |-- morepython/
| |-- __init__.py
| |-- extrapython1.py
| |-- extrapython2.py
The file which contains if __name__ == "__main__": is myappname.py. This file has a line at the top, import src.mainclassfile.
When this is downloaded, I would like for a user to be able to do something like:
$ python setup.py build
$ python setup.py install
And then it will be an installed executable which they can invoke from anywhere on the command line with:
$ myappname arg1 arg2
The important parts of my setup.py are like:
from setuptools import setup, find_packages

setup(
    name='code2flow',
    scripts=['myappname/myappname.py'],
    package_dir={'myappname': 'myappname'},
    packages=find_packages(),
)
Current state
By running:
$ sudo python setup.py install
And then in a new shell:
$ myapp.py
I am getting a "No module named" error.
The problem here is that your package layout is broken.
It happens to work in-place, at least in 2.x. Why? You're not accessing the package as myappname; rather, the directory that is the package's directory is also the top-level script directory, so you end up getting any of its siblings via old-style relative import.
Once you install things, of course, you'll end up with the myappname package installed in your site-packages, and then a copy of myappname.py installed somewhere on your PATH, so relative import can't possibly work.
The right way to do this is to put your top-level scripts outside the package (or, ideally, into a bin directory).
Also, your module and your script shouldn't have the same name. (There are ways you can make that work, but… just don't try it.)
So, for example:
myappname/
|-- setup.py
|-- myscriptname.py
|-- myappname/
| |-- __init__.py
| |-- src/
| |-- __init__.py
| |-- mainclassfile.py
Of course so far, all this makes it do is break in in-place mode the exact same way it breaks when installed. But at least that makes things easier to debug, right?
Anyway, your myscriptname.py then has to use an absolute import:
import myappname.src.mainclassfile
And your setup.py has to find the script in the right place:
scripts=['myscriptname.py'],
Finally, if you need some code from myscriptname.py to be accessible inside the module, as well as in the script, the right thing to do is to refactor it into two files—but if that's too difficult for some reason, you can always write a wrapper script.
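For the wrapper-script option, a minimal sketch could look like this (it assumes mainclassfile exposes a main() function, which is not shown in the question):

#!/usr/bin/env python
# myscriptname.py -- thin wrapper around the package's real entry point
import sys
from myappname.src.mainclassfile import main  # assumed entry point

if __name__ == '__main__':
    sys.exit(main())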
See Arranging your file and directory structure and related sections in the Hitchhiker's Guide to Packaging for more details.
Also see PEP 328 for details on absolute vs. relative imports (but keep in mind that when it refers to "up to Python 2.5" it really means "up to 2.7", and "starting in 2.6" means "starting in 3.0").
For a few examples of packages that include scripts that get installed this way via setup.py (and, usually, easy_install and pip), see ipython, bpython, modulegraph, py2app, and of course easy_install and pip themselves.
The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:
new_project/
antigravity/
antigravity.py
test/
test_antigravity.py
setup.py
etc.
My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail since the module is not on the path.
I know I could modify PYTHONPATH and use other search-path tricks, but I can't believe that's the simplest way; it's fine if you're the developer, but it's not realistic to expect your users to do that if they just want to check that the tests pass.
The other alternative is just to copy the test file into the other directory, but that seems a bit dumb and misses the point of having the tests in a separate directory to start with.
So, if you had just downloaded the source to my new project how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
The best solution in my opinion is to use the unittest command-line interface, which will add the directory to sys.path for you (this is done in the TestLoader class).
For example for a directory structure like this:
new_project
├── antigravity.py
└── test_antigravity.py
You can just run:
$ cd new_project
$ python -m unittest test_antigravity
For a directory structure like yours:
new_project
├── antigravity
│ ├── __init__.py # make it a package
│ └── antigravity.py
└── test
├── __init__.py # also make test a package
└── test_antigravity.py
And in the test modules inside the test package, you can import the antigravity package and its modules as usual:
# import the package
import antigravity
# import the antigravity module
from antigravity import antigravity
# or an object inside the antigravity module
from antigravity.antigravity import my_object
Running a single test module:
To run a single test module, in this case test_antigravity.py:
$ cd new_project
$ python -m unittest test.test_antigravity
Just reference the test module the same way you import it.
Running a single test case or test method:
Also you can run a single TestCase or a single test method:
$ python -m unittest test.test_antigravity.GravityTestCase
$ python -m unittest test.test_antigravity.GravityTestCase.test_method
Running all tests:
You can also use test discovery, which will discover and run all the tests for you; they must be modules or packages named test*.py (the pattern can be changed with the -p/--pattern flag):
$ cd new_project
$ python -m unittest discover
$ # Also works without discover for Python 3
$ # as suggested by #Burrito in the comments
$ python -m unittest
This will run all the test*.py modules inside the test package.
The simplest solution for your users is to provide an executable script (runtests.py or some such) which bootstraps the necessary test environment, including, if needed, adding your root project directory to sys.path temporarily. This doesn't require users to set environment variables, something like this works fine in a bootstrap script:
import sys, os
sys.path.insert(0, os.path.dirname(__file__))
Then your instructions to your users can be as simple as "python runtests.py".
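A fuller sketch of such a bootstrap script, assuming the tests live in a test/ directory next to it:

# runtests.py -- works no matter which directory it is launched from
import os
import sys
import unittest

here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, here)  # make the project's packages importable

suite = unittest.defaultTestLoader.discover(os.path.join(here, 'test'))
result = unittest.TextTestRunner(verbosity=2).run(suite)
sys.exit(0 if result.wasSuccessful() else 1)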
Of course, if the path you need really is os.path.dirname(__file__), then you don't need to add it to sys.path at all; Python always puts the directory of the currently running script at the beginning of sys.path, so depending on your directory structure, just locating your runtests.py at the right place might be all that's needed.
Also, the unittest module in Python 2.7+ (which is backported as unittest2 for Python 2.6 and earlier) now has test discovery built-in, so nose is no longer necessary if you want automated test discovery: your user instructions can be as simple as python -m unittest discover.
I've had the same problem for a long time. What I recently chose is the following directory structure:
project_path
├── Makefile
├── src
│ ├── script_1.py
│ ├── script_2.py
│ └── script_3.py
└── tests
├── __init__.py
├── test_script_1.py
├── test_script_2.py
└── test_script_3.py
and in the __init__.py script of the test folder, I write the following:
import os
import sys
PROJECT_PATH = os.getcwd()
SOURCE_PATH = os.path.join(PROJECT_PATH, "src")
sys.path.append(SOURCE_PATH)
The Makefile is super important for sharing the project, because it enforces running the scripts properly. Here is the command that I put in the Makefile:
run_tests:
	python -m unittest discover .
The Makefile is important not just because of the command it runs but also because of where it runs it from. If you were to cd into tests and run python -m unittest discover ., it wouldn't work, because the __init__.py script in tests calls os.getcwd(), which would then point to the wrong absolute path (that path would be appended to sys.path and you would be missing your source folder). The scripts would run, since discover finds all the tests, but they wouldn't run properly. The Makefile is there to avoid having to remember this issue.
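As an aside, one way to make the test __init__.py immune to the working directory altogether is to derive the project path from __file__ instead of os.getcwd() (a sketch, keeping the same layout):

import os
import sys

# resolve the project root from this file's location, not from the cwd
PROJECT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(os.path.join(PROJECT_PATH, "src"))

With that change the discovery command works from any directory, though the Makefile remains a convenient single entry point.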
I really like this approach because I don't have to touch my src folder, my unit tests or my environment variables and everything runs smoothly.
I generally create a "run tests" script in the project directory (the one that is common to both the source directory and test) that loads my "All Tests" suite. This is usually boilerplate code, so I can reuse it from project to project.
run_tests.py:
import unittest
import test.all_tests
testSuite = test.all_tests.create_test_suite()
text_runner = unittest.TextTestRunner().run(testSuite)
test/all_tests.py (from How do I run all Python unit tests in a directory?)
import glob
import unittest

def create_test_suite():
    test_file_strings = glob.glob('test/test_*.py')
    module_strings = ['test.' + s[5:-3] for s in test_file_strings]
    suites = [unittest.defaultTestLoader.loadTestsFromName(name)
              for name in module_strings]
    testSuite = unittest.TestSuite(suites)
    return testSuite
With this setup, you can indeed just import antigravity in your test modules. The downside is you would need more support code to execute a particular test... I just run them all every time.
From the article you linked to:
Create a test_modulename.py file and put your unittest tests in it. Since the test modules are in a separate directory from your code, you may need to add your module's parent directory to your PYTHONPATH in order to run them:

$ cd /path/to/googlemaps
$ export PYTHONPATH=$PYTHONPATH:/path/to/googlemaps/googlemaps
$ python test/test_googlemaps.py

Finally, there is one more popular unit testing framework for Python (it's that important!), nose. nose helps simplify and extend the builtin unittest framework (it can, for example, automagically find your test code and setup your PYTHONPATH for you), but it is not included with the standard Python distribution.
Perhaps you should look at nose as it suggests?
I had the same problem, with a separate unit test folder. Following the suggestions mentioned above, I add the absolute source path to sys.path.
The benefit of the following solution is that one can run the file test/test_yourmodule.py without first changing into the test directory:
import sys, os
testdir = os.path.dirname(__file__)
srcdir = '../antigravity'
sys.path.insert(0, os.path.abspath(os.path.join(testdir, srcdir)))
import antigravity
import unittest
I noticed that if you run the unittest command line interface from your "src" directory, then imports work correctly without modification.
python -m unittest discover -s ../test
If you want to put that in a batch file in your project directory, you can do this:
setlocal & cd src & python -m unittest discover -s ../test
Solution/Example for Python unittest module
Given the following project structure:
ProjectName
├── project_name
| ├── models
| | └── thing_1.py
| └── __main__.py
└── test
├── models
| └── test_thing_1.py
└── __main__.py
You can run your project from the root directory with python project_name, which calls ProjectName/project_name/__main__.py.
To run your tests with python test, effectively running ProjectName/test/__main__.py, you need to do the following:
1) Turn your test/models directory into a package by adding an __init__.py file. This makes the test cases within the subdirectory accessible from the parent test directory.
# ProjectName/test/models/__init__.py
from .test_thing_1 import Thing1TestCase
2) Modify your system path in test/__main__.py to include the project_name directory.
# ProjectName/test/__main__.py
import sys
import unittest
sys.path.append('../project_name')
loader = unittest.TestLoader()
testSuite = loader.discover('test')
testRunner = unittest.TextTestRunner(verbosity=2)
testRunner.run(testSuite)
Now you can successfully import things from project_name in your tests.
# ProjectName/test/models/test_thing_1.py
import unittest
from project_name.models import Thing1  # this doesn't work without 'sys.path.append' per step 2 above

class Thing1TestCase(unittest.TestCase):

    def test_thing_1_init(self):
        thing_id = 'ABC'
        thing1 = Thing1(thing_id)
        self.assertEqual(thing_id, thing1.id)
If you run "python setup.py develop", then the package will be on the path. But you may not want to do that, because you could infect your system Python installation, which is why tools like virtualenv and buildout exist.
If you use VS Code and your tests are located at the same level as your project, then running and debugging your code doesn't work out of the box. What you can do is change your launch.json file:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "pythonPath": "${config:python.pythonPath}",
            "program": "${file}",
            "cwd": "${workspaceRoot}",
            "env": {},
            "envFile": "${workspaceRoot}/.env",
            "debugOptions": [
                "WaitOnAbnormalExit",
                "WaitOnNormalExit",
                "RedirectOutput"
            ]
        }
    ]
}
The key line here is envFile
"envFile": "${workspaceRoot}/.env",
In the root of your project, add a .env file.
Inside your .env file, add the path to the root of your project:
PYTHONPATH=C:\YOUR\PYTHON\PROJECT\ROOT_DIRECTORY
This temporarily adds that path to PYTHONPATH, and you will be able to debug unit tests from VS Code.
Use setup.py develop to make your working directory be part of the installed Python environment, then run the tests.
Python 3+
Adding to #Pierre
Using a unittest directory structure like this:
new_project
├── antigravity
│ ├── __init__.py # make it a package
│ └── antigravity.py
└── test
├── __init__.py # also make test a package
└── test_antigravity.py
To run the test module test_antigravity.py:
$ cd new_project
$ python -m unittest test.test_antigravity
Or a single TestCase
$ python -m unittest test.test_antigravity.GravityTestCase
Mandatory: don't forget the __init__.py files, even if empty, otherwise it will not work.
You can't import from the parent directory without some voodoo. Here's yet another way that works with at least Python 3.6.
First, have a file test/context.py with the following content:
import sys
import os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
Then have the following import in the file test/test_antigravity.py:
import unittest
try:
import context
except ModuleNotFoundError:
import test.context
import antigravity
Note that the reason for this try-except clause is that
import test.context fails when run with "python test_antigravity.py" and
import context fails when run with "python -m unittest" from the new_project directory.
With this trickery they both work.
Now you can run all the test files within test directory with:
$ pwd
/projects/new_project
$ python -m unittest
or run an individual test file with:
$ cd test
$ python test_antigravity.py
Ok, it's not much prettier than having the content of context.py within test_antigravity.py, but maybe a little. Suggestions are welcome.
It's possible to use a wrapper script which runs selected tests or all of them.
For instance:
./run_tests antigravity/*.py
or, to run all tests recursively, use globbing (tests/**/*.py, enabled by shopt -s globstar).
The wrapper can basically use argparse to parse the arguments like:
parser = argparse.ArgumentParser()
parser.add_argument('files', nargs='*')
Then load all the tests:
for filename in args.files:
    exec(open(filename).read())
then add them into your test suite (using inspect):
alltests = unittest.TestSuite()
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and name.startswith("FooTest"):
        alltests.addTest(unittest.makeSuite(obj))
and run them:
result = unittest.TextTestRunner(verbosity=2).run(alltests)
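Putting those pieces together, a minimal sketch of the whole wrapper (the FooTest prefix is an assumption; loadTestsFromTestCase replaces the older makeSuite, which is deprecated in recent Pythons):

#!/usr/bin/env python
# run_tests -- execute the test classes defined in the given files
import argparse
import inspect
import sys
import unittest

parser = argparse.ArgumentParser()
parser.add_argument('files', nargs='*')
args = parser.parse_args()

# pull the test definitions into this module's namespace
for filename in args.files:
    exec(open(filename).read())

# collect every class whose name starts with "FooTest" into one suite
alltests = unittest.TestSuite()
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and name.startswith("FooTest"):
        alltests.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(obj))

result = unittest.TextTestRunner(verbosity=2).run(alltests)
sys.exit(0 if result.wasSuccessful() else 1)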
Check this example for more details.
See also: How to run all Python unit tests in a directory?
Following is my project structure:
ProjectFolder:
- project:
    - __init__.py
    - item.py
- tests:
    - test_item.py
I found it better to import in the setUp() method:
import sys
import unittest

class ItemTest(unittest.TestCase):

    def setUp(self):
        # make the project package importable; assumes the tests are
        # run from inside the tests directory
        sys.path.insert(0, "..")
        from project import item
        # further setup using this import

    def test_item_props(self):
        # do my assertions
        pass

if __name__ == "__main__":
    unittest.main()
"What's the usual way of actually running the tests?"
I use Python 3.6.2
cd new_project
pytest test/test_antigravity.py
To install pytest: sudo pip install pytest
I didn't set any path variable, and my imports don't fail with the same "test" project structure.
I commented out the if __name__ == '__main__' stuff, like this:
test_antigravity.py
import unittest

import antigravity

class TestAntigravity(unittest.TestCase):

    def test_something(self):
        # ... test stuff here
        pass
# if __name__ == '__main__':
#
# if __package__ is None:
#
# import something
# sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
# from .. import antigravity
#
# else:
#
# from .. import antigravity
#
# unittest.main()
You should really use the pip tool.
Use pip install -e . to install your package in development mode. This is a very good practice, recommended by pytest (see their good practices documentation, where you can also find two project layouts to follow).
If you have multiple directories in your test directory, then you have to add an __init__.py file to each directory.
/home/johndoe/snakeoil
└── test
    ├── __init__.py
    ├── frontend
    │   ├── __init__.py
    │   └── test_foo.py
    └── backend
        ├── __init__.py
        └── test_bar.py
Then to run every test at once, run:
python -m unittest discover -s /home/johndoe/snakeoil/test -t /home/johndoe/snakeoil
Source: python -m unittest -h
-s START, --start-directory START
Directory to start discovery ('.' default)
-t TOP, --top-level-directory TOP
Top level directory of project (defaults to start
directory)
This Bash script will run the Python unittest tests in the test directory from anywhere in the file system, no matter what working directory you are in.
This is useful when staying in the ./src or ./example working directory and you need a quick unit test:
#!/bin/bash
# resolve the directory this script lives in, then discover tests next to it
this_program="$0"
dirname="$(dirname "$this_program")"
readlink="$(readlink -e "$dirname")"
python -m unittest discover -s "$readlink"/test -v
There's no need for a test/__init__.py file to burden your package with memory overhead in production.
This way will let you run the test scripts from wherever you want without messing around with system variables from the command line.
This adds the main project folder to the python path, with the location found relative to the script itself, not relative to the current working directory.
import sys, os
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
Add that to the top of all your test scripts. That will add the main project folder to the system path, so any module imports that work from there will now work. And it doesn't matter where you run the tests from.
You can obviously adjust the path expression above to match your main project folder location.
A simple solution for *nix based systems (macOS, Linux); and probably also Git bash on Windows.
PYTHONPATH=$PWD python test/test_antigravity.py
print statements work as expected, unlike with pytest test/test_antigravity.py. A perfect way for "scripts", but not really for unit testing.
Of course, if I wanted proper automated testing, I would consider pytest with appropriate settings.
With cwd being the root project dir (new_project in your case), you can run the following command without __init__.py in any directory:
python -m unittest discover -s test
But you need the import in test_antigravity.py to be:
from antigravity.antigravity import your_object
instead of:
from antigravity import your_object
If you don't like the from antigravity.antigravity form, you might like Alan L's answer.
If you are looking for a command line-only solution:
Based on the following directory structure (generalized with a dedicated source directory):
new_project/
    src/
        antigravity.py
    test/
        test_antigravity.py
Windows: (in new_project)
$ set PYTHONPATH=%PYTHONPATH%;%cd%\src
$ python -m unittest discover -s test
See this question if you want to use this in a batch for-loop.
Linux: (in new_project)
$ export PYTHONPATH=$PYTHONPATH:$(pwd)/src
$ python -m unittest discover -s test
With this approach, it is also possible to add more directories to the PYTHONPATH if necessary.
If your project has a setup.py file, try:
python3 setup.py build
and
python3 setup.py develop --user
These do the work of configuring paths and so on. Try it!