I have a medium-sized Python command-line program that runs well from my source code, and I've created a source distribution file and installed it into a virtual environment using "python setup.py install".
Since this is a pure Python program, and provided that the end users have installed Python and the required packages, my idea is that I can distribute it through PyPI for all available platforms as a source distribution.
Upon install, I get an 'appname' directory within the virtualenv site-packages directory, and it also runs correctly when I write python 'pathtovirtualenv'/Lib/site-packages/'myappname'.
But is this the way the end user is supposed to run distutils-distributed programs from the command line?
I find a lot of information on how to distribute a program using distutils, but not on how the end user is supposed to launch it after installing it.
Since you already created a setup.py, I would recommend looking at the entry_points:
entry_points={
    'console_scripts': [
        'scriptname=yourpackage.module:function',
    ],
},
Here, you have a package named yourpackage and a module named module in it, and you refer to the function function. This function will be wrapped by a script called scriptname, which will be installed into the user's bin folder, which is normally on the $PATH, so the user can simply type scriptname after installing your package via pip install.
To sum up: a user will install the package via pip install yourpackage and will finally be able to call the function in module via scriptname.
Here are some docs on this topic:
https://pythonhosted.org/setuptools/setuptools.html#automatic-script-creation
http://www.scotttorborg.com/python-packaging/command-line-scripts.html
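For illustration, here is a minimal sketch of what the entry-point target might look like. yourpackage, module, function, and scriptname are the placeholder names from the snippet above, not real code from any project:

```python
# Hypothetical yourpackage/module.py -- the target of the
# 'scriptname=yourpackage.module:function' entry point above.
import sys

def function(argv=None):
    # console_scripts wrappers call the function with no arguments,
    # so read the command line from sys.argv; the return value
    # becomes the process exit status.
    argv = sys.argv[1:] if argv is None else argv
    print("scriptname called with:", argv)
    return 0
```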
Well, I eventually figured it out.
Initially, I wanted to use just distutils; I like it when the end user can install with a minimum of extra dependencies. But I have now discovered that setuptools is the better option in my case.
My directory structure looks like this (Subversion):
trunk
|-- appname
| |-- __init__.py # an empty file
| |-- __main__.py # calls appname.main()
| |-- appname.py # contains a main() and imports moduleN
| |-- module1.py
| |-- module2.py
| |-- ...
|-- docs
| |-- README
| |-- LICENSE
| |-- ...
|-- setup.py
And my setup.py basically looks like this:
# This setup file is to be used with a setuptools source distribution
# Run "python setup.py sdist" to deploy
from setuptools import setup, find_packages

setup(
    name="appname",
    ...
    include_package_data=True,
    packages=find_packages(),
    zip_safe=True,
    entry_points={
        'console_scripts': ['appname=appname.appname:main'],
    },
)
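For completeness, the __main__.py in the tree above only needs to forward to main(). A self-contained sketch of the pattern, with a stand-in main() since the real one lives in appname/appname.py:

```python
# Stand-in for the main() defined in appname/appname.py
def main():
    print("appname main() running")
    return 0

# The real appname/__main__.py would contain only:
#     from appname.appname import main
#     raise SystemExit(main())
# which makes "python -m appname" work alongside the console script.
```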
The next step now is to figure out how to install the contents of the docs directory on the user's computer.
But right now, I'm thinking about adding --readme, --license, --changes, --sample (and so forth) options to the main script, to display them at run time.
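A minimal sketch of that idea with argparse; the flag names match those mentioned above, and the doc texts are hypothetical stand-ins (in practice they could be read from files installed as package data):

```python
import argparse

# Hypothetical embedded doc texts; in a real program these could be
# loaded from files shipped with the package.
DOCS = {
    "readme": "appname README ...",
    "license": "appname LICENSE ...",
    "changes": "appname CHANGES ...",
}

def main(argv=None):
    parser = argparse.ArgumentParser(prog="appname")
    for name in DOCS:
        parser.add_argument("--" + name, action="store_true",
                            help="print the %s and exit" % name.upper())
    args = parser.parse_args(argv)
    for name, text in DOCS.items():
        if getattr(args, name):
            print(text)
            return 0
    # ... normal program flow would go here ...
    return 0
```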
Related
I'm pip-installing my module like so:
cd my_working_dir
pip install -e .
When I later import the module from within Python, can I somehow detect if the module is installed in this editable mode?
Right now, I'm just checking if there's a .git folder in os.path.dirname(mymodule.__file__), which, well, only works if there's actually a .git folder there. Is there a more reliable way?
Another workaround:
Place a "not to install" file into your package. This can be a README.md or a not_to_install.txt file. Use any non-Python extension to prevent that file from being installed. Then check whether that file exists in your package.
The suggested source structure:
my_repo
|-- setup.py
`-- awesome_package
|-- __init__.py
|-- not_to_install.txt
`-- awesome_module.py
setup.py:
# setup.py
from setuptools import setup, find_packages

setup(
    name='awesome_package',
    version='1.0.0',
    # find_packages() will ignore non-python files.
    packages=find_packages(),
)
The __init__.py or the awesome_module.py:
import os

# The current directory
__here__ = os.path.dirname(os.path.realpath(__file__))

# Place the following function into __init__.py or into awesome_module.py
def check_editable_installation():
    '''
    Returns true if the package was installed with the editable flag.
    '''
    not_to_install_exists = os.path.isfile(os.path.join(__here__, 'not_to_install.txt'))
    return not_to_install_exists
Run pip list, and there is a column called "Editable project location". If that column has a value, specifically the directory from which you installed it, then the package is pip installed in editable mode.
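That column can also be read programmatically via pip's JSON output. A sketch; note that the editable_project_location field only appears in reasonably recent pip versions, so treat its presence as an assumption:

```python
import json
import subprocess
import sys

def editable_location(package):
    """Return pip's reported editable project location, or None."""
    out = subprocess.check_output(
        [sys.executable, "-m", "pip", "list", "--format=json"])
    for info in json.loads(out):
        if info["name"].lower() == package.lower():
            # the key is absent (or None) for non-editable installs
            return info.get("editable_project_location")
    return None
```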
I don't know of a way to detect this directly (e.g. ask setuptools).
You could try to detect that your package cannot be reached through the paths in sys.path. But that's tedious. It's also not bullet-proof: what if it can be reached through sys.path but it's also installed as an editable package?
The best option is to look at the artifacts an editable install leaves in your site-packages folder. There's a file called my_package.egg-link there.
from pathlib import Path
import site

# One way to get the site-packages folder (works in a standard
# virtualenv; adjust if your layout differs)
site_packages_folder = site.getsitepackages()[0]

# Assuming this current file is located in the root of your package
current_package_root = str(Path(__file__).parent.parent)

installed_as_editable = False
egg_link_file = Path(site_packages_folder) / "my_package.egg-link"
try:
    linked_folder = egg_link_file.read_text()
    installed_as_editable = current_package_root in linked_folder
except FileNotFoundError:
    installed_as_editable = False
Note: to make this a bit more bullet-proof, read only the first line of the egg-link file and parse it using Path() as well to account for proper slashes etc.
Recently I had to test if various packages were installed in editable mode across different machines. Running pip show <package name> reveals not only the version, but other information about the package, which includes the location of the source code. If the package was not installed in editable mode, this location will point to site-packages, so for my case it was sufficient to check the output of that command:
import subprocess

def check_if_editable(name_of_the_package: str) -> bool:
    out = subprocess.check_output(["pip", "show", name_of_the_package]).decode()
    # A regular install's Location points into site-packages;
    # an editable install's points at the source checkout.
    return "site-packages" not in out
Haven't found a definite source for this, but if the output of
$ pip show my-package -f
is something like this:
..
..
Files:
__editable__.my-package-0.0.5.pth
my-package-0.0.5.dist-info/INSTALLER
my-package-0.0.5.dist-info/METADATA
my-package-0.0.5.dist-info/RECORD
my-package-0.0.5.dist-info/REQUESTED
my-package-0.0.5.dist-info/WHEEL
my-package-0.0.5.dist-info/direct_url.json
my-package-0.0.5.dist-info/top_level.txt
then it's probably editable.
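Those __editable__*.pth files come from PEP 660 editable wheels. Relatedly, PEP 610's direct_url.json records editability explicitly, which allows a check without shelling out to pip. A sketch, assuming Python 3.8+ for importlib.metadata; it may not cover legacy setup.py develop installs:

```python
import json
from importlib.metadata import PackageNotFoundError, distribution

def is_editable(package):
    """True if direct_url.json says {"dir_info": {"editable": true}}."""
    try:
        dist = distribution(package)
    except PackageNotFoundError:
        return False
    text = dist.read_text("direct_url.json")
    if text is None:
        return False  # installed from an index, not from a direct URL
    return bool(json.loads(text).get("dir_info", {}).get("editable", False))
```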
I have a package with the following layout:
tiny-py-interpreter/
|-- setup.py
|-- .coveragerc
|-- tinypy/
|   |-- foo/
|   |-- bar/
|   |-- tests/
|   |-- tinypyapp.py
|   `-- run_tests.py
Here are some lines from the setup.py:
setup(
    ...
    packages=find_packages(),
    entry_points={
        'console_scripts': ['tinypy = tinypy.tinypyapp:main'],
    },
    test_suite='tinypy.run_tests.get_suite',
)
After installing the package a console script called tinypy is installed:
pip3 install .
Then I run coverage:
coverage run setup.py test
coverage combine
coverage report
All the tests I have are implemented in such a way that each test launches a subprocess of tinypy, so I use parallel = True in .coveragerc to capture the results of the coverage run.
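For reference, the relevant .coveragerc might look like this (a sketch; parallel = True makes each process write its own data file, which coverage combine then merges):

```ini
[run]
parallel = True
source = tinypy
```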
Essentially, I have the same layout as coverage itself, where coverage and cmdline.py are the same things as tinypy and tinypyapp.py in my case.
The problem: when the console script tinypy (tinypyapp.py) is executed, it uses the installed version of the tinypy package, which is located in /usr/local/lib/python3.5/site-packages/tinypy/. Coverage ignores the sources in ./tinypy (as they're not used). If the source parameter is omitted, it's possible to see coverage for the code from site-packages/tinypy.
The question: how do I structure the project properly? When tinypyapp.py is installed as a script, it is installed at the same level as the tinypy folder (one level higher, not inside). I can't place tinypyapp.py outside the tinypy folder, as then setup.py can't find it. Because of this I can't use tinypyapp.py and have to use the name of the installed script (which is tinypy).
I think the easiest thing would be to not install the code you are working on into site-packages, but instead to use a development install:
pip install -e .
My current workaround is to create test_entry_point.py in the root folder:
tiny-py-interpreter/
|-- setup.py
|-- test_entry_point.py
|-- ...
With the following content:
import sys
from tinypy.tinypyapp import main

if __name__ == '__main__':
    main()
And use the following filename when launching a subprocess in a test:
tinypy_binary = sys.executable + ' ' + os.getcwd() + '/test_entry_point.py'
subprocess.run(tinypy_binary, ...)
My Python project has many scripts in many files. The general structure is:
Project/
|-- bin/
|   |-- project/
|   |   |-- calculations/
|   |   |   |-- some scripts
|   |   |-- mainApp/
|   |   |   |-- some scripts
|   |   |-- interpolations/
|   |   |   |-- some scripts
|   |   |-- more files
|   |-- other scripts
|-- tests/
|-- setup.py
|-- README
I have many imports like this:
import bin.project.mainApp.MainAppFrame
My setup.py file is
from setuptools import setup, find_packages

setup(
    name='Application to orifices',
    version='1.0',
    author="Michał Walkowiak",
    author_email="michal.walkowiak93#gmail.com",
    description="Application in python 3.4 with noSQL BerkleyDB",
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            'PracaInzynierska = bin.project.mainApp.MainApp:__init__'
        ]
    },
    scripts=[
        'bin/project/mainApp/__init__.py',
        'bin/project/mainApp/MainApp.py',
        'bin/project/mainApp/MainAppFrame.py',
        'bin/project/informations/__init__.py',
        'bin/project/informations/DisplayInformations.py',
        'bin/project/informations/InformationsFrame.py',
        'bin/project/calculations/Calculate.py',
        'bin/project/calculations/UnitConversion.py',
        'bin/project/databaseHandler/__init__.py',
        'bin/project/databaseHandler/databaseHandler.py',
        'bin/project/databaseMonitoring/__init__.py',
        'bin/project/databaseMonitoring/DatabaseFrame.py',
        'bin/project/databaseMonitoring/DisplayDatabase.py',
        'bin/project/initializeGUI/__init__.py',
        'bin/project/initializeGUI/CalculationsFrame.py',
        'bin/project/initializeGUI/initGui.py',
        'bin/project/interpolation/__init__.py',
        'bin/project/interpolation/Interpolate.py',
        'bin/project/orificeMethods/__init__.py',
        'bin/project/orificeMethods/methodsToCountOrifice.py',
        'bin/project/steamMethods/__init__.py',
        'bin/project/steamMethods/methodToCountParamsSteam.py',
        'bin/project/waterMethods/__init__.py',
        'bin/project/waterMethods/methodsToCountParamsWater.py'
    ]
)
I use setup.py with
python3 setup.py bdist --formats=gztar
It generates a dist folder with a tar.gz file, but when I unpack it, every script is in the /bin folder. When I try to run MainApp.py with
python3 MainApp.py
I receive an error:
Traceback (most recent call last):
File "MainApp.py", line 7, in <module>
import bin.project.mainApp.MainAppFrame
ImportError: No module named 'bin'
When I change
import bin.project.mainApp.MainAppFrame
to
import MainAppFrame
it works, but it doesn't in PyCharm, where locally there are paths to every file.
Is there any option to generate an installer which, after unpacking, would have the same paths as the original project, or does it always add all files to one folder?
Here is a solution I used, for a simple GUI (tkinter) program:
Windows: copy the Python folder and create a portable version, then launch the program by creating a shortcut, e.g. python.exe ../foo/start.py. Use Nullsoft Installer to create the installation file that will take care of the links, directories and uninstall steps on Windows systems.
Linux: distribute the code, specify the dependencies, and create a script that creates links for an executable. Of course, you can also look up how to create your own package, e.g. for Debian.
All other: similar as for Linux.
That's one of the beauties of Python.
The source for the package is here
I'm installing the package from the index via:
easy_install hackertray
pip install hackertray
easy_install installs images/hacker-tray.png to the following folder:
/usr/local/lib/python2.7/dist-packages/hackertray-1.8-py2.7.egg/images/
While, pip installs it to:
/usr/local/images/
My setup.py is as follows:
from setuptools import setup

setup(name='hackertray',
      version='1.8',
      description='Hacker News app that sits in your System Tray',
      packages=['hackertray'],
      data_files=[('images', ['images/hacker-tray.png'])])
My MANIFEST file is:
include images/hacker-tray.png
Don't use data_files with relative paths. Actually, don't use data_files at all, unless you make sure the target paths are absolute ones properly generated in a cross-platform way instead of hard-coded values.
Use package_data instead:
setup(
    # (...)
    package_data={
        "hackertray.data": [
            "hacker-tray.png",
        ],
    },
)
where hackertray.data is a proper python package (i.e. is a directory that contains a file named __init__.py) and hacker-tray.png is right next to __init__.py.
Here's how it should look:
.
|-- hackertray
| |-- __init__.py
| `-- data
| |-- __init__.py
| `-- hacker-tray.png
`-- setup.py
You can get the full path to the image file using:
import os
from pkg_resources import resource_filename

print os.path.abspath(resource_filename('hackertray.data', 'hacker-tray.png'))
I hope that helps.
PS: Python<2.7 seems to have a bug regarding packaging of the files listed in package_data. Always make sure to have a manifest file if you're using something older than Python 2.7 for packaging. See here for more info: https://groups.google.com/d/msg/python-virtualenv/v5KJ78LP9Mo/OiBqMcYVFYAJ
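On Python 3 (3.9+), importlib.resources offers the same lookup without the pkg_resources dependency. A sketch, using the hackertray.data layout described above as the intended target (the test below substitutes a stdlib package, since any importable package works):

```python
from importlib import resources

def data_file(package, filename):
    """Return a Traversable pointing at a data file inside a package."""
    return resources.files(package) / filename

# e.g. data_file("hackertray.data", "hacker-tray.png")
```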
I have a small python application that I would like to make into a downloadable / installable executable for UNIX-like systems. I am under the impression that setuptools would be the best way to make this happen but somehow this doesn't seem to be a common task.
My directory structure looks like this:
myappname/
|-- setup.py
|-- myappname/
| |-- __init__.py
| |-- myappname.py
| |-- src/
| |-- __init__.py
| |-- mainclassfile.py
| |-- morepython/
| |-- __init__.py
| |-- extrapython1.py
| |-- extrapython2.py
The file which contains if __name__ == "__main__": is myappname.py. This file has a line at the top, import src.mainclassfile.
When this is downloaded, I would like for a user to be able to do something like:
$ python setup.py build
$ python setup.py install
And then it will be an installed executable which they can invoke from anywhere on the command line with:
$ myappname arg1 arg2
The important parts of my setup.py are like:
from setuptools import setup, find_packages

setup(
    name='code2flow',
    scripts=['myappname/myappname.py'],
    package_dir={'myappname': 'myappname'},
    packages=find_packages(),
)
Current state
By running:
$ sudo python setup.py install
And then in a new shell:
$ myappname.py
I am getting a No module named error
The problem here is that your package layout is broken.
It happens to work in-place, at least in 2.x. Why? You're not accessing the package as myappname; rather, the same directory that is that package's directory is also the top-level script directory, so you end up getting any of its siblings via old-style relative import.
Once you install things, of course, you'll end up with the myappname package installed in your site-packages, and then a copy of myappname.py installed somewhere on your PATH, so relative import can't possibly work.
The right way to do this is to put your top-level scripts outside the package (or, ideally, into a bin directory).
Also, your module and your script shouldn't have the same name. (There are ways you can make that work, but… just don't try it.)
So, for example:
myappname/
|-- setup.py
|-- myscriptname.py
|-- myappname/
| |-- __init__.py
| |-- src/
| |-- __init__.py
| |-- mainclassfile.py
Of course, so far all this does is make it break in in-place mode the exact same way it breaks when installed. But at least that makes things easier to debug, right?
Anyway, your myscriptname.py then has to use an absolute import:
import myappname.src.mainclassfile
And your setup.py has to find the script in the right place:
scripts=['myscriptname.py'],
Finally, if you need some code from myscriptname.py to be accessible inside the module as well as in the script, the right thing to do is to refactor it into two files; but if that's too difficult for some reason, you can always write a wrapper script.
See Arranging your file and directory structure and related sections in the Hitchhiker's Guide to Packaging for more details.
Also see PEP 328 for details on absolute vs. relative imports (but keep in mind that when it refers to "up to Python 2.5" it really means "up to 2.7", and "starting in 2.6" means "starting in 3.0").
For a few examples of packages that include scripts that get installed this way via setup.py (and, usually, easy_install and pip), see ipython, bpython, modulegraph, py2app, and of course easy_install and pip themselves.