I tried using py2app, but I can't figure out where to put the filename of the file I want to make the standalone for. What command do I need to run? I'm extremely confused...
In your setup.py, you want to do something like this:
from distutils.core import setup
import py2app

setup(
    name="App name",
    version="App version",
    options=opts,  # see the py2app docs; this can hold many settings
    app=["main.py"],  # this is the standalone script
)
See the docs: you pass the name of your file to the py2applet script.
$ py2applet --make-setup MyApplication.py
Wrote setup.py
$ python setup.py py2app -A
And IMHO, PyInstaller is the best tool for building Python binaries.
I wrote a command-line app using Python.
The problem is that I want users to be able to run the command globally after installing it.
I wrote the command-line tool and published the package, but I don't know how to make it globally available to users as a system command.
Example :
pip install forosi
and after that, the user can run this command globally from anywhere, like:
forosi help
I'm going to assume you have the main file you are supposed to run in src/forosi.py in your package directory, but you should be able to adapt this if it's different.
First, you want to rename the script to forosi, without the .py extension.
Second, at the top of the file (now called forosi) add the following:
#!/usr/bin/env python3
... rest of file...
In your setup.py for the package, you need to use the scripts option.
setuptools.setup(
...
scripts=['src/forosi'],
...
)
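For illustration, here is a minimal sketch of what the src/forosi script itself might contain (the help subcommand is hypothetical, just to mirror the example above):

```python
#!/usr/bin/env python3
# src/forosi -- hypothetical sketch of the command-line entry script
import sys

def main(argv=None):
    # fall back to the real command-line arguments when run as a script
    argv = sys.argv[1:] if argv is None else argv
    if argv and argv[0] == "help":
        print("usage: forosi [help]")
        return 0
    print("forosi: no command given, try 'forosi help'")
    return 0

if __name__ == "__main__":
    main()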
This is the method that requires minimal refactoring of your code. If you happen to have a main() function in one of your Python files as the entrypoint of the script, you can add the following to your setup.py instead of the above:
setup(
    ...
    entry_points={
        'console_scripts': ['forosi = src.forosi:main'],
    },
    ...
)
In either case, to build the package locally, run
python3 setup.py bdist_wheel
This will create a wheel file in the dist/ directory called package_name-version-<info>.whl. This is the standard distribution format for PyPI packages.
To install this package, run:
pip3 install dist/package_name-version-<info>.whl
or if you only have one version in the dist folder, just
pip3 install dist/*
Instead of typing
$ python3 Program.py -<flags> arguments, etc
I want to be able to DL the git clone and then be able to type
$ Program -<flags> arguments, etc
# program name without the .py extension
I've seen other programs include .yaml files and requirements.txt files, and some are dockerized, but I can't find anything that shows me how to do this. All the tutorials and guides stop short of explaining how to turn a script into a simple command-line program.
I've done all the argparse work, but I'm looking for a guide or some instruction on how to dockerize it and simply run it without having to navigate to the destination folder.
If you're thinking about distributing the program, you should probably add CLI entry points in your package's setup.py file.
For example:
Project structure
ROOT/
- setup.py
- src/
- program.py
src/program.py
# program.py
def main():
pass
setup.py
# setup.py
from setuptools import find_packages, setup
setup(
name='my_program',
version='1.0.0',
packages=find_packages(),
entry_points={
'console_scripts': [
'Program=src.program:main'
]
}
)
The important bit is the line 'Program=src.program:main': it associates the name Program (the name to invoke from the command line) with the function main of src/program.py.
Note that this name could be anything - it doesn't necessarily need to be related to your package name, python file names, etc.
You can perform a local installation of your package in order to test this.
From the ROOT directory, type:
$ pip install -e .
Afterwards, typing
$ Program
in the terminal from any directory will execute the main function from src/program.py.
This behaviour is the same if anyone pip installs your package over PyPI, instead of your local installation.
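For completeness, here is a minimal sketch of what main in src/program.py might look like (the --name flag is made up for illustration):

```python
# src/program.py -- hypothetical sketch of the entry-point module
import argparse

def main(argv=None):
    parser = argparse.ArgumentParser(prog="Program")
    parser.add_argument("--name", default="world", help="who to greet")
    args = parser.parse_args(argv)  # argv=None falls back to sys.argv
    print(f"Hello, {args.name}!")
    return 0
```

With the console_scripts entry above, running Program --name Ada from any directory invokes this function.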
Add a shebang to the top of the file:
#!/usr/bin/env python3
(or point it at whatever Python binary you want to use). Once the file is also marked executable (chmod +x Program.py), you can do:
./Program.py -<flags> arguments etc
As the title suggests, I'm trying to make a Python script accessible from the command line. I've found libraries like click and argparse that make it easy to access arguments passed from the command line, but the user still has to run the script through Python.
Instead of
python /location/to/myscript.py
I want to be able to just do
myscript
from any directory
From what I understand, I can achieve this on my computer by editing my PATH variables. However, I would like to be able to simply do:
pip install myscript
and then access the script by typing myscript from anywhere. Is there some special code I would put in the setup.py?
Use console_scripts to hook a command to a specific Python function (rather than a whole executable). In your setup.py file:
from setuptools import setup
setup(
...
entry_points = {
'console_scripts': ['mybinary=mymodule.command_line:cli'],
},
name='mymodule',
...
)
the command_line.py script would be:
import mymodule
def cli():
print("Hello world!")
and the project directory would look like this:
myproject/
mymodule/
__init__.py
command_line.py
...
setup.py
Setuptools will generate a standalone "shim" script which imports your module and calls the registered function.
That shim lets you call mybinary directly and ensures it is correctly invoked by Python. The shims are platform-specific (on Windows, for example, a .exe is generated).
See packaging documentation for more details.
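To make the mechanism concrete, a generated shim behaves roughly like this self-contained sketch (simplified; cli here stands in for the registered mymodule.command_line:cli function):

```python
# Roughly what a generated console-script shim does (simplified sketch)
def cli():
    # stands in for the registered mymodule.command_line:cli function
    print("Hello world!")
    return 0

if __name__ == "__main__":
    # the real shim passes the return value to sys.exit()
    cli()
```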
You can do this with setuptools.
An example of a nice setup.py (say your package requires pandas and numpy):
import setuptools
setuptools.setup(
name='myscript',
version='1.0',
scripts=['./scripts/myscript'],
author='Me',
description='This runs my script which is great.',
packages=['lib.myscript'],
install_requires=[
'setuptools',
'pandas >= 0.22.0',
'numpy >= 1.16.0'
],
python_requires='>=3.5'
)
Your directory should be setup as follows:
[dkennetz package]$ ls
lib scripts setup.py
inside lib would be:
[dkennetz package]$ ls lib
myscript
inside of myscript would be:
[dkennetz package]$ ls lib/myscript
__main__.py
__init__.py
helper_module1.py
helper_module2.py
__main__.py would be used to call your function and do whatever you want to do.
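As a sketch, __main__.py might look something like this (the -h handling is illustrative only; the helper modules are not shown):

```python
# lib/myscript/__main__.py -- hypothetical sketch
import sys

def run(args):
    if not args or args[0] == "-h":
        print("usage: myscript [-h] ...")
        return 0
    print(f"running with: {args}")
    return 0

if __name__ == "__main__":
    run(sys.argv[1:])
```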
inside scripts would be:
[dkennetz package]$ ls scripts
myscript
and the contents of myscript would be:
#!/usr/bin/env bash
if [[ $# -eq 0 ]]; then
    python3 -m myscript -h
else
    python3 -m myscript "$@"
fi
Then, to install, run:
python setup.py install
This installs your program along with all of the dependencies you included in install_requires=[] in your setup.py, and makes myscript available as a command-line module:
[dkennetz ~]$ myscript
Assuming you are using the bash shell and Python 3 is installed, you will need to append the directory containing your script to the PATH variable in the .bash_profile file in your home directory. Also, your Python script needs something similar to the following as its first line:
#!/usr/bin/env python3
Additionally, you can remove the extension (.py) from the script file, so that, as in my example above, the file is named script rather than script.py.
You will also need to make the script executable:
chmod 755 filename
If you want the script to be accessible system-wide, you will need to modify /etc/profile and add to the bottom of the file:
export PATH=$PATH:/path/to/script
Alternatively, if you move the python script file to /usr/local/bin, it may not be necessary to make any profile changes as this directory is often already in the PATH.
To see the value of PATH issue the following command at the shell
echo $PATH
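Putting those steps together, here is a minimal shell sketch (the directory and the script name filename are placeholders for illustration):

```shell
# create a per-user bin directory and a tiny executable Python script
mkdir -p "$HOME/bin"
printf '#!/usr/bin/env python3\nprint("hi")\n' > "$HOME/bin/filename"
chmod 755 "$HOME/bin/filename"      # make it executable
export PATH="$PATH:$HOME/bin"       # add the directory to PATH
filename                            # now runs from any directory
```

To make the PATH change permanent, put the export line in ~/.bash_profile (or /etc/profile for a system-wide change).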
I know this question is older, and for a project using setuptools you should definitely use Tombart's answer.
That said, I have been using Poetry, which configures everything through a .toml file. Since that is likely what others searching for this will be using, here is how you register a script there (at least with Poetry):
[project.scripts]
myscript = "mymodule.command_line:cli"
Not sure if this works for flit or other package managers, but it works for Poetry.
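For context, a minimal pyproject.toml sketch might look like this (names are placeholders; this assumes a recent Poetry that supports the PEP 621 [project] table, while older Poetry versions use [tool.poetry.scripts] with the same target syntax):

```toml
[project]
name = "mymodule"
version = "0.1.0"

[project.scripts]
myscript = "mymodule.command_line:cli"

[build-system]
requires = ["poetry-core>=2.0"]
build-backend = "poetry.core.masonry.api"
```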
How can I make a setup.py file for my own script? I need to make my script global
(add it to /usr/bin) so I can run it from the console by just typing: scriptName arguments.
OS: Linux.
EDIT:
Now my script is installable, but how can I make it global, so that I can run it from the console by just typing its name?
EDIT: This answer deals only with installing executable scripts into /usr/bin. I assume you have basic knowledge on how setup.py files work.
Create your script and place it in your project like this:
yourprojectdir/
setup.py
scripts/
myscript.sh
In your setup.py file do this:
from setuptools import setup
# (distutils.core also provides setup, but setuptools is recommended for this)
setup(
# basic stuff here
scripts = [
'scripts/myscript.sh'
]
)
Then type
python setup.py install
Basically that's it. There's a chance that your script will land not exactly in /usr/bin, but in some other directory. If this is the case, type
python setup.py install --help
and search for --install-scripts parameter and friends.
I know that this question is quite old, but just in case, here is how I solved the problem myself: I wanted to set up a package for PyPI that, when installed with pip, would be available as a system command, not just importable from Python.
setup(
    # rest of setup
    entry_points={
        'console_scripts': [
            '<app> = <package>.<app>:main'
        ]
    },
)
I am trying to build a Python multi-file code with PyInstaller. For that I have compiled the code with Cython, and am using .so files generated in place of .py files.
Assuming the 1st file is main.py and the imported ones are file_a.py and file_b.py, I get file_a.so and file_b.so after Cython compilation.
When I put main.py, file_a.so and file_b.so in a folder and run it by "python main.py", it works.
But when I build it with PyInstaller and try to run the executable generated, it throws errors for imports done in file_a and file_b.
How can this be fixed? One solution is to import all standard modules in main.py and this works. But if I do not wish to change my code, what can be the solution?
So I got this to work for you.
Please have a look at Bundling Cython extensions with Pyinstaller
Quick Start:
git clone https://github.com/prologic/pyinstaller-cython-bundling.git
cd pyinstaller-cython-bundling
./dist/build.sh
This produces a static binary:
$ du -h dist/hello
4.2M dist/hello
$ ldd dist/hello
not a dynamic executable
And produces the output:
$ ./dist/hello
Hello World!
FooBar
Basically this came down to producing a simple setup.py that builds the extensions file_a.so and file_b.so, and then using pyinstaller to bundle the application and the extensions into a single executable.
Example setup.py:
from glob import glob
from setuptools import setup
from Cython.Build import cythonize
setup(
name="test",
scripts=glob("bin/*"),
ext_modules=cythonize("lib/*.pyx")
)
Building the extensions:
$ python setup.py develop
Bundling the application:
$ pyinstaller -r file_a.so,dll,file_a.so -r file_b.so,dll,file_b.so -F ./bin/hello
Just in case someone's looking for a quick fix.
I ran into the same situation and found a quick/dirty way to do the job. The issue is that PyInstaller does not add the libraries needed to run your program to the .exe file.
All you need to do is import all the libraries (and the .so files) needed into your main.py file (the file which calls file_a.py and file_b.py). For example, assume that file_a.py uses opencv library (cv2) and file_b.py uses matplotlib library. Now in your main.py file you need to import cv2 and matplotlib as well. Basically, whatever you import in file_a.py and file_b.py, you have to import that in main.py as well. This tells pyinstaller that the program needed these libraries and it includes those libraries in the exe file.
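A less invasive alternative, if you'd rather not touch main.py: PyInstaller can be told about imports its static analysis cannot see via hidden imports, either on the command line (--hidden-import cv2 --hidden-import matplotlib) or in the generated .spec file. A fragment of such a spec file might look like this (module names taken from the example above; the other Analysis arguments are omitted):

```python
# main.spec fragment -- declare imports PyInstaller's analysis cannot detect
a = Analysis(
    ['main.py'],
    hiddenimports=['cv2', 'matplotlib'],
)
```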